CzeGPT-2 – Training New Model for Czech Generative Text Processing Evaluated with the Summarization Task

Warning

This publication is not affiliated with the Faculty of Arts but with the Faculty of Informatics. The official page of the publication is on the muni.cz website.
Authors

HÁJEK Adam HORÁK Aleš

Year of publication 2024
Type Article in a peer-reviewed journal
Journal / Source IEEE ACCESS
Faculty / MU Workplace

Faculty of Informatics

Citation
www https://ieeexplore.ieee.org/document/10453575
DOI http://dx.doi.org/10.1109/ACCESS.2024.3371689
Keywords Task analysis; Training; Measurement; Transformers; Decoding; Computational modeling; Vocabulary; Czech; GPT-2; large language model; model evaluation; model training; summarization
Description Automatic text summarization (ATS), alongside neural machine translation and question answering, is one of the leading tasks in Natural Language Processing (NLP). In recent years, ATS has seen significant development, especially in the English NLP world. Modern approaches are mainly based on the versatile Transformer architecture proposed by Vaswani et al. in 2017, which has revolutionized the field and was later tuned and adjusted to the needs of various tasks. Non-mainstream languages, with Czech taken as a representative, on the other hand, lag somewhat behind these efforts and tend to rely on lighter or heuristic methods. With the new CzeGPT-2 model and abstractive summarizer, we take a step forward by detailing the process of training a GPT-2 generative transformer model for a new language, providing a comprehensive evaluation on the task of Czech summarization, and pointing out the benefits of this approach. We also present an in-depth analysis of the errors in the generated summaries, allowing us to locate the model's weak spots.
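Summarization quality in evaluations like the one described above is commonly scored with ROUGE-style n-gram overlap. The following is a minimal, self-contained sketch of ROUGE-N F1 over whitespace tokens; it is an illustration of the general metric family, not the authors' actual evaluation code, and it omits stemming and any Czech-specific normalization.

```python
from collections import Counter

def ngrams(tokens, n):
    # All contiguous n-grams of the token list, with multiplicities.
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n(candidate, reference, n=1):
    # Simplified ROUGE-N F1: clipped n-gram overlap between a candidate
    # summary and a single reference summary.
    cand = ngrams(candidate.split(), n)
    ref = ngrams(reference.split(), n)
    if not cand or not ref:
        return 0.0
    overlap = sum((cand & ref).values())  # clipped counts
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

For example, identical texts score 1.0 and disjoint texts score 0.0; production evaluations would typically use an established ROUGE implementation and proper tokenization instead.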
Related projects:
