
Prof. Dr. Andrea Horbach

Professorin "Lehren und Lernen in der Digitalen Welt" Christian-Albrechts-Universität zu Kiel und IPN, Leiterin der Nachwuchsgruppe EduNLP

E-Mail: andrea.horbach

What is my role at CATALPA?

As a computational linguist, I head the junior research group EduNLP. I am fascinated by how computers can analyze, understand, and even produce human language, even though language is so complex and ambiguous. Through automatic language processing, I want to help learners write better texts and enable teachers to evaluate texts more efficiently.

Why CATALPA?

For us humans, language is the communication medium of choice in most situations. In online teaching in particular, instructors have so far often had to fall back on "language-free" task formats simply because computers can score them automatically more easily. At CATALPA, I want to help ensure that digital teaching can be geared to the needs of learners instead of being subordinated to what is technically feasible.

Vita

    • Professor of "Teaching and Learning in the Digital World" at Christian-Albrechts-Universität zu Kiel (CAU) and the Leibniz-Institut für die Pädagogik der Naturwissenschaften und Mathematik (IPN) (since 09/2024)
    • Junior Professor of Digital Humanities at Stiftung Universität Hildesheim (04/2023 – 09/2024)
    • Head of the junior research group "Educational Natural Language Processing" at CATALPA (formerly D²L² "Digitalisierung, Diversität und Lebenslanges Lernen. Konsequenzen für die Hochschulbildung"), FernUniversität in Hagen (since 12/2021)
    • Research associate, Language Technology Lab, Universität Duisburg-Essen (10/2016 – 11/2021)
    • PhD in computational linguistics, Universität des Saarlandes, Saarbrücken (2018)
    • Research associate and doctoral candidate, Department of Computational Linguistics, Universität des Saarlandes, Saarbrücken (04/2010 – 09/2016)
    • Diplom in computational linguistics, Universität des Saarlandes, Saarbrücken (2008)

Research interests

    • Natural language processing for educational applications
    • Automatic scoring of free-text responses
    • Task and feedback generation

Publications

    2024

    Journal articles

    • Jansen, T., Meyer, J., Fleckenstein, J., Horbach, A., Keller, S., & Möller, J. (2024). Individualizing goal-setting interventions using automated writing evaluation to support secondary school students’ text revisions. Learning and Instruction, 89, 101847.
    • Meyer, J., Jansen, T., Schiller, R., Liebenow, L. W., Steinbach, M., Horbach, A., & Fleckenstein, J. (2024). Using LLMs to bring evidence-based feedback into the classroom: AI-generated feedback increases secondary students’ text revision, motivation, and positive emotions. Computers and Education: Artificial Intelligence, 6, 100199.
    • Schaller, N.-J., Horbach, A., Höft, L. I., Ding, Y., Bahr, J. L., Meyer, J., & Jansen, T. (2024). DARIUS: A Comprehensive Learner Corpus for Argument Mining in German-Language Essays.
    • Shin, H. J., Andersen, N., Horbach, A., Kim, E., Baik, J., & Zehner, F. (2024). Operational Automatic Scoring of Text Responses in 2016 ePIRLS: Performance and Linguistic Variance.

    Conference papers

    • Bexte, M., Horbach, A., & Zesch, T. (2024a). EVil-Probe - a Composite Benchmark for Extensive Visio-Linguistic Probing. In N. Calzolari, M.-Y. Kan, V. Hoste, A. Lenci, S. Sakti, & N. Xue (Hrsg.), Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024) (S. 6682–6700). ELRA; ICCL. https://aclanthology.org/2024.lrec-main.591
    • Bexte, M., Horbach, A., & Zesch, T. (2024b). Rainbow – A Benchmark for Systematic Testing of How Sensitive Visio-Linguistic Models are to Color Naming. In Y. Graham & M. Purver (Hrsg.), 18th Conference of the European Chapter of the Association for Computational Linguistics (S. 1858–1875). Association for Computational Linguistics. https://aclanthology.org/2024.eacl-long.112/
    • Ding, Y., Kashefi, O., Somasundaran, S., & Horbach, A. (2024). When Argumentation Meets Cohesion: Enhancing Automatic Feedback in Student Writing. Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), 17513–17524.
    • Shardlow, M., Alva-Manchego, F., Batista-Navarro, R. T., Bott, S., Ramirez, S. C., Cardon, R., François, T., Hayakawa, A., Horbach, A., Huelsing, A., et al. (2024). An Extensible Massively Multilingual Lexical Simplification Pipeline Dataset using the MultiLS Framework. Proceedings of the 3rd Workshop on Tools and Resources for People with REAding DIfficulties (READI) @ LREC-COLING 2024, 38–46.

    Talks and poster presentations

    • Wehrhahn, F., Ding, Y., Gaschler, R., Zhao, F., & Horbach, A. (2024, June 26–28). Argumentative essay writing practice with automated feedback and highlighting [Poster presentation]. EARLI SIG WRITING 2024 – ways2write, Université Paris Nanterre, France.

    2023

    Journal articles

    • Horbach, A., Pehlke, J., Laarmann-Quante, R., & Ding, Y. (2023). Crosslingual content scoring in five languages using machine-translation and multilingual transformer models. International Journal of Artificial Intelligence in Education, 1–27.
    • Zesch, T., Horbach, A., & Zehner, F. (2023). To Score or Not to Score: Factors Influencing Performance and Feasibility of Automatic Content Scoring of Text Responses. Educational Measurement: Issues and Practice, 42(1), 44–58. https://doi.org/10.1111/emip.12544

    Conference papers

    • Bexte, M., Horbach, A., & Zesch, T. (2023). Similarity-Based Content Scoring - A more Classroom-Suitable Alternative to Instance-Based Scoring? Findings of the Association for Computational Linguistics: ACL 2023, 1892–1903. https://aclanthology.org/2023.findings-acl.119
    • Ding, Y., Bexte, M., & Horbach, A. (2023a). CATALPA_EduNLP at PragTag-2023. In M. Alshomary, C.-C. Chen, S. Muresan, J. Park, & J. Romberg (Hrsg.), Proceedings of the 10th Workshop on Argument Mining (S. 197–201). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.argmining-1.22
    • Ding, Y., Bexte, M., & Horbach, A. (2023b). Score It All Together: A Multi-Task Learning Study on Automatic Scoring of Argumentative Essays. Findings of the Association for Computational Linguistics: ACL 2023, 13052–13063. https://aclanthology.org/2023.findings-acl.825
    • Ding, Y., Trüb, R., Fleckenstein, J., Keller, S., & Horbach, A. (2023). Sequence Tagging in EFL Email Texts as Feedback for Language Learners. Proceedings of the 12th Workshop on NLP for Computer Assisted Language Learning, 53–62.
    • Mousa, A., Laarmann-Quante, R., & Horbach, A. (2023). Manual and Automatic Identification of Similar Arguments in EFL Learner Essays. Proceedings of the 12th Workshop on NLP for Computer Assisted Language Learning, 85–93.

    Editorships

    • Kochmar, E., Burstein, J., Horbach, A., Laarmann-Quante, R., Madnani, N., Tack, A., Yaneva, V., Yuan, Z., & Zesch, T. (2023). Proceedings of the 18th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2023).

    Talks and poster presentations

    • Zehner, F., Zesch, T., & Horbach, A. (2023a, February 28 – March 2). Mehr als nur Technologie- und Fairnessfrage: Ethische Prinzipien beim automatischen Bewerten von Textantworten aus Tests [Paper presentation]. 10th GEBF Annual Conference, Universität Duisburg-Essen.
    • Zehner, F., Zesch, T., & Horbach, A. (2023b, February 28 – March 2). To Score or Not to Score? Machbarkeits- und Performanzfaktoren für automatisches Scoring von Textantworten [Paper presentation]. 10th GEBF Annual Conference, Universität Duisburg-Essen.

    2022

    Conference papers

    • Bexte, M., Horbach, A., & Zesch, T. (2022). Similarity-Based Content Scoring - How to Make S-BERT Keep Up With BERT. Proceedings of the 17th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2022), 118–123. https://aclanthology.org/2022.bea-1.16
    • Bexte, M., Laarmann-Quante, R., Horbach, A., & Zesch, T. (2022). LeSpell - A Multi-Lingual Benchmark Corpus of Spelling Errors to Develop Spellchecking Methods for Learner Language. Proceedings of the Language Resources and Evaluation Conference, 697–706. https://aclanthology.org/2022.lrec-1.73
    • Ding, Y., Bexte, M., & Horbach, A. (2022). Don’t Drop the Topic - The Role of the Prompt in Argument Identification in Student Writing. Proceedings of the 17th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2022), 124–133. https://aclanthology.org/2022.bea-1.17
    • Horbach, A., Laarmann-Quante, R., Liebenow, L., Jansen, T., Keller, S., Meyer, J., Zesch, T., & Fleckenstein, J. (2022). Bringing Automatic Scoring into the Classroom – Measuring the Impact of Automated Analytic Feedback on Student Writing Performance. Swedish Language Technology Conference and NLP4CALL, 72–83. https://ecp.ep.liu.se/index.php/sltc/article/view/580/550
    • Laarmann-Quante, R., Schwarz, L., Horbach, A., & Zesch, T. (2022). ‘Meet me at the ribary’ – Acceptability of spelling variants in free-text answers to listening comprehension prompts. Proceedings of the 17th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2022), 173–182. https://aclanthology.org/2022.bea-1.22

    Book chapters

    • Horbach, A. (2022). Werkzeuge für die automatische Sprachanalyse. In M. Beißwenger, L. Lemnitzer, & C. Müller-Spitzer (Eds.), Forschen in der Linguistik. Eine Methodeneinführung für das Germanistik-Studium. Wilhelm Fink (UTB).

    2021

    Conference papers

    • Bexte, M., Horbach, A., & Zesch, T. (2021). Implicit Phenomena in Short-answer Scoring Data. Proceedings of the First Workshop on Understanding Implicit and Underspecified Language.
    • Haring, C., Lehmann, R., Horbach, A., & Zesch, T. (2021). C-Test Collector: A Proficiency Testing Application to Collect Training Data for C-Tests. Proceedings of the 16th Workshop on Innovative Use of NLP for Building Educational Applications, 180–184. https://www.aclweb.org/anthology/2021.bea-1.19

    2020

    Conference papers

    • Ding, Y., Horbach, A., Wang, H., Song, X., & Zesch, T. (2020). Chinese Content Scoring: Open-Access Datasets and Features on Different Segmentation Levels. Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing (AACL-IJCNLP 2020). https://www.aclweb.org/anthology/2020.aacl-main.37.pdf
    • Ding, Y., Riordan, B., Horbach, A., Cahill, A., & Zesch, T. (2020). Don’t take "nswvtnvakgxpm" for an answer - The surprising vulnerability of automatic content scoring systems to adversarial input. Proceedings of the 28th International Conference on Computational Linguistics (COLING 2020). https://www.aclweb.org/anthology/2020.coling-main.76.pdf
    • Horbach, A., Aldabe, I., Bexte, M., Lacalle, O. de, & Maritxalar, M. (2020). Appropriateness and Pedagogic Usefulness of Reading Comprehension Questions. Proceedings of the 12th International Conference on Language Resources and Evaluation (LREC-2020). https://www.aclweb.org/anthology/2020.lrec-1.217.pdf

    2018

    Conference papers

    • Horbach, A., & Pinkal, M. (2018). Semi-Supervised Clustering for Short Answer Scoring. LREC. http://www.lrec-conf.org/proceedings/lrec2018/pdf/427.pdf
    • Horbach, A., Stennmanns, S., & Zesch, T. (2018). Cross-lingual Content Scoring. Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications, 410–419. http://www.aclweb.org/anthology/W18-0550
    • Zesch, T., & Horbach, A. (2018). ESCRITO - An NLP-Enhanced Educational Scoring Toolkit. Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018). http://www.lrec-conf.org/proceedings/lrec2018/pdf/590.pdf
    • Zesch, T., Horbach, A., Goggin, M., & Wrede-Jackes, J. (2018). A flexible online system for curating reduced redundancy language exercises and tests. In P. Taalas, J. Jalkanen, L. Bradley, & S. Thouësny (Eds.), Future-proof CALL: language learning as exploration and encounters - short papers from EUROCALL 2018 (pp. 319–324). https://doi.org/10.14705/rpnet.2018.26.857

    2017

    Conference papers

    • Horbach, A., Ding, Y., & Zesch, T. (2017). The Influence of Spelling Error on Content Scoring Performance. Proceedings of the 4th Workshop on Natural Language Processing Techniques for Educational Applications, 45–53. http://www.aclweb.org/anthology/W17-5908
    • Horbach, A., Scholten-Akoun, D., Ding, Y., & Zesch, T. (2017). Fine-grained essay scoring of a complex writing task for native speakers. Proceedings of the Building Educational Applications Workshop at EMNLP, 357–366. http://aclweb.org/anthology/W17-5040
    • Riordan, B., Horbach, A., Cahill, A., Zesch, T., & Lee, C. M. (2017). Investigating neural architectures for short answer scoring. Proceedings of the Building Educational Applications Workshop at EMNLP, 159–168. http://www.aclweb.org/anthology/W17-5017
  • A complete list of my publications can be found on Google Scholar.

    2016

    Conference papers

    • Keiper, L., Horbach, A., & Thater, S. (2016). Improving POS Tagging of German Learner Language in a Reading Comprehension Scenario. Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), 198–205. https://www.aclweb.org/anthology/L16-1030
    • Horbach, A., & Palmer, A. (2016). Investigating Active Learning for Short-Answer Scoring. Proceedings of the 11th Workshop on Innovative Use of NLP for Building Educational Applications, 301–311. Association for Computational Linguistics. https://aclanthology.org/W16-0535/

    2015

    Journal articles

    • Horbach, A., Thater, S., Steffen, D., Fischer, P. M., Witt, A., & Pinkal, M. (2015). Internet corpora: A challenge for linguistic processing. Datenbank-Spektrum, 15(1), 41–47. https://link.springer.com/article/10.1007%2Fs13222-014-0172-z

    Conference papers

    • Ostermann, S., Horbach, A., & Pinkal, M. (2015). Annotating Entailment Relations for Short-answer Questions. Proceedings of the 2nd Workshop on Natural Language Processing Techniques for Educational Applications, 49–58. https://aclanthology.org/W15-4408/
    • Horbach, A., Poitz, J., & Palmer, A. (2015). Using Shallow Syntactic Features to Measure Influences of L1 and Proficiency Level in EFL Writings. Proceedings of the Fourth Workshop on NLP for Computer-Assisted Language Learning, 21–34. https://www.aclweb.org/anthology/W15-1903

    2014

    Conference papers

    • Koleva, N., Horbach, A., Palmer, A., Ostermann, S., & Pinkal, M. (2014). Paraphrase Detection for Short Answer Scoring. Proceedings of the Third Workshop on NLP for Computer-Assisted Language Learning, 59–73. https://www.aclweb.org/anthology/W14-3505
    • Horbach, A., Palmer, A., & Wolska, M. (2014). Finding a Tradeoff between Accuracy and Rater’s Workload in Grading Clustered Short Answers. Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC-2014), 588–595. http://www.lrec-conf.org/proceedings/lrec2014/pdf/887_Paper.pdf

    2013

    Conference papers

    • Horbach, A., Palmer, A., & Pinkal, M. (2013). Using the text to evaluate short answers for reading comprehension exercises. Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 1: Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity, 286–295. https://www.aclweb.org/anthology/S13-1041

Theses

  • If you are interested in writing a thesis in the area of Educational NLP, please feel free to contact me!

    The following list of theses I have supervised in the past illustrates the range of possible topics:

    • Evaluation of picture description tasks using visio-linguistic neural models (Marie Bexte, 2021)
    • The influence of vocabulary features on the automatic evaluation of English learner essays (Viet Phe Nguyen, 2021)
    • Comparative visualization of essays (Tim Ludwig, 2020)
    • Investigating transformer-based methods for short answer scoring (Ahmed Nahzan Ilyas, 2020)
    • Influence of grammatical error correction on Chinese essay scoring (Bingxin Chen, 2020)
    • Methods for automatically classifying errors in one-word answers to listening comprehension tasks (Frederik Wollatz, 2020)
    • Bootstrapping a conversational tutor by semi-automatically analyzing interaction data (Ankita Mandal, 2020)
    • English-Chinese cross-lingual scoring of short answer questions (Xuefeng Song, 2019)
    • Chinese Short Answer Scoring (Haoshi Wang, 2019)
    • Adversarial Examples for Evaluating Automatic Content Scoring Systems (Yuning Ding, 2019)
    • Cross-lingual content scoring (Sebastian Stennmanns, 2018)
    • Topic-sensitive methods for automatic spelling correction (Ruishen Liu, 2018)
    • Cross-task scoring of complex writing tasks using domain adaptation and task-independent features (Marie Bexte, 2018)
    • A Comparative Evaluation of German Grapheme-to-Phoneme Conversion Libraries (Rüdiger Fröhlich, 2018)
    • The influence of spelling errors on the performance of short-answer scoring systems (Yuning Ding, 2017)
    • The impact of language errors on the performance of native language identification (Yufei Mu, 2017)
    • Exploring the Role of Textual Entailment for Short Answer Scoring (Simon Ostermann, 2015)
    • Improving POS Tagging of German Learner Language in a Reading Comprehension Scenario (Lena Keiper, 2015)
    • Applying POS-Based Language Models of Learner Data for Native Language Identification and Error Detection (Jonathan Poitz, 2014)
    • Paraphrase Fragment Extraction for German with Applications for Short Answer Scoring (Nikolina Koleva, 2014)