Detecting Spam in Web Corpora




Year of publication 2012
Type Article in Proceedings
Conference 6th Workshop on Recent Advances in Slavonic Natural Language Processing
MU Faculty or unit

Faculty of Informatics

Field Informatics
Keywords spam detection; web corpora; n-gram
Description To increase the search result rank of a website, many fake websites full of generated or semi-generated texts have been created in recent years. Since we do not want this garbage in our text corpora, this is becoming a problem. This paper describes generated texts observed in recently crawled web corpora and proposes a new way to detect such unwanted content. The main idea of the presented approach is to compare the frequencies of word n-grams in a potentially forged text with the frequencies of word n-grams in a trusted corpus. As a source of spam text, fake webpages concerning loans, taken from an English web corpus as an example of data aimed at fooling search engines, were used. The results show that this approach is able to properly detect a certain kind of forged text with accuracy reaching almost 70 %.
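The paper's actual model, corpus, and thresholds are not given in this record; the following is only a minimal sketch of the general idea it describes (scoring a suspect text by how many of its word n-grams also occur in a trusted corpus), with illustrative function names and toy data:

```python
from collections import Counter

def word_ngrams(text, n=3):
    """Split text into lowercase word n-grams."""
    words = text.lower().split()
    return [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]

def ngram_overlap_score(document, trusted_corpus, n=3):
    """Fraction of the document's n-gram occurrences that also appear
    in the trusted corpus; generated or keyword-stuffed spam tends to
    score low because its n-grams are rare in natural text."""
    doc_ngrams = Counter(word_ngrams(document, n))
    trusted = set(word_ngrams(trusted_corpus, n))
    total = sum(doc_ngrams.values())
    if total == 0:
        return 0.0
    hits = sum(c for g, c in doc_ngrams.items() if g in trusted)
    return hits / total

# toy data: a natural sentence vs. keyword-stuffed spam about loans
trusted = "a loan is money lent at interest to be repaid over time"
suspect = "loan money fast loan cheap loan money now loan best loan"
print(ngram_overlap_score(trusted, trusted, n=2))  # 1.0
print(ngram_overlap_score(suspect, trusted, n=2))  # 0.0
```

A real detector would use frequency distributions from a large trusted corpus rather than a single sentence, but the core comparison of n-gram statistics is the same.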
Related projects: