Impact of Methodological Choices on the Evaluation of Student Models

Warning

This publication is filed under the Faculty of Informatics, not the Faculty of Arts. The official page of the publication is on the muni.cz website.
Authors

EFFENBERGER Tomáš, PELÁNEK Radek

Year of publication 2020
Type Article in conference proceedings
Conference Artificial Intelligence in Education. AIED 2020. Lecture Notes in Computer Science, vol 12163.
Faculty / Workplace at MU

Faculty of Informatics

Citation
www https://doi.org/10.1007/978-3-030-52237-7_13
DOI http://dx.doi.org/10.1007/978-3-030-52237-7_13
Keywords adaptive learning; student modeling; intelligent tutoring systems; introductory programming
Description The evaluation of student models involves many methodological decisions, e.g., the choice of performance metric, data filtering, and cross-validation setting. Such issues may seem like technical details and do not get much attention in published research. Nevertheless, their impact on experiments can be significant. We report experiments with six models for predicting problem-solving times in four introductory programming exercises. Our focus is not on these models per se but rather on the methodological choices necessary for performing these experiments. The results show, in particular, the importance of the choice of performance metric, including details of its computation and presentation.
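
To make the methodological point concrete, the following is a minimal, hypothetical sketch in Python (not the authors' experimental code): two toy predictors of log-transformed solving times are compared under RMSE, MAE, and Spearman correlation using student-level cross-validation folds, and the choice of metric alone can flip which predictor looks better. All data, model names, and parameter values below are invented for illustration.

    # Illustrative sketch only (not the paper's code): how metric choice and
    # cross-validation setting enter a comparison of models that predict
    # problem-solving times. Everything here is hypothetical.
    import numpy as np
    from scipy.stats import spearmanr
    from sklearn.model_selection import GroupKFold

    rng = np.random.default_rng(0)

    # Hypothetical dataset: 1000 attempts by 100 students; targets are
    # log-transformed solving times.
    students = rng.integers(0, 100, size=1000)
    log_times = rng.normal(loc=4.0, scale=1.0, size=1000)

    def rmse(y_true, y_pred):
        return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

    def evaluate(y_true, y_pred):
        """Several metrics; models may rank differently under each of them."""
        return {
            "RMSE": rmse(y_true, y_pred),
            "MAE": float(np.mean(np.abs(y_true - y_pred))),
            "Spearman": float(spearmanr(y_true, y_pred).correlation),
        }

    # Student-level cross-validation: all attempts of a student land in the
    # same fold, so each model is evaluated on entirely unseen students.
    cv = GroupKFold(n_splits=5)
    for fold, (train_idx, test_idx) in enumerate(cv.split(log_times, groups=students)):
        y_test = log_times[test_idx]
        # Two toy "models": A is noisy but unbiased, B preserves the ordering
        # of times but is systematically biased. A tends to win on RMSE/MAE,
        # B wins on rank correlation - the metric choice flips the comparison.
        predictions = {
            "model_A": y_test + rng.normal(0.0, 0.7, size=len(test_idx)),
            "model_B": y_test + 1.0,
        }
        for name, y_pred in predictions.items():
            print(f"fold {fold} {name}: {evaluate(y_test, y_pred)}")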
Related projects:
