On Eliminating Inductive Biases of Deep Language Models

Authors

ŠTEFÁNIK Michal

Year of publication 2021
Type Appeared in Conference without Proceedings
MU Faculty or unit

Faculty of Informatics

Description This poster outlines the shortcomings of modern neural language models in out-of-domain performance and suggests that these may be a consequence of narrow model specialization. To eliminate this flaw, it proposes two main directions of future work: 1. the introduction of evaluation metrics that can identify out-of-domain generalization abilities, and 2. an objective-based approach that adjusts the training objective to respect the desired generalization properties of the system.
Related projects:
