Etudes in Chinese-Hungarian corpus-based lexical acquisition


Bibliographic details
Author: Ugray Gábor
Corporate author: Magyar Számítógépes Nyelvészeti Konferencia (14.) (2018) (Szeged)
Document type: Book chapter
Published: 2018
Series: Magyar Számítógépes Nyelvészeti Konferencia 14
Keywords: Linguistics - computer applications
Online Access: http://acta.bibl.u-szeged.hu/59049
Description
Abstract: The paper reports on a series of experiments to extract matching lexical items from a 6.1 million segment corpus of movie subtitles in Mandarin Chinese and Hungarian, with the aim of expanding an existing bilingual dictionary. The challenges of data cleansing and tokenization are outlined, and the outcomes of word alignment, vector space embeddings, neural machine translation and two standard statistical approaches are presented. A bilingual concordance tool for end users, based on word alignments, is introduced. A quantitative and qualitative evaluation of the results finds that the new methods drastically outperform simple collocation extraction, but also shows that human judgement is indispensable before including vocabulary in a published dictionary.
Extent/Physical description: 247-259
ISBN:978-963-306-578-5
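The "simple collocation extraction" baseline that the abstract compares against can be illustrated with a minimal sketch: score source-target token pairs by pointwise mutual information over sentence-aligned segments. The function name, parameters, and toy corpus below are illustrative assumptions, not the paper's actual implementation or data.

```python
import math
from collections import Counter

def pmi_pairs(aligned_segments, min_count=2):
    """Score (source, target) token pairs by pointwise mutual information.

    aligned_segments: list of (src_tokens, tgt_tokens) pairs from a
    sentence-aligned corpus. A higher PMI suggests a translation candidate.
    """
    src_counts, tgt_counts, pair_counts = Counter(), Counter(), Counter()
    n = len(aligned_segments)
    for src, tgt in aligned_segments:
        # Count each token once per segment (document-level co-occurrence).
        for s in set(src):
            src_counts[s] += 1
        for t in set(tgt):
            tgt_counts[t] += 1
        for s in set(src):
            for t in set(tgt):
                pair_counts[(s, t)] += 1
    scores = {}
    for (s, t), c in pair_counts.items():
        if c < min_count:
            continue  # drop rare pairs, which inflate PMI
        scores[(s, t)] = math.log((c * n) / (src_counts[s] * tgt_counts[t]))
    return scores

# Hypothetical toy segment pairs for illustration only.
corpus = [
    (["我", "爱", "你"], ["én", "szeretlek"]),
    (["我", "看", "你"], ["én", "látlak"]),
    (["他", "爱", "你"], ["ő", "szeret", "téged"]),
]
scores = pmi_pairs(corpus, min_count=2)
```

Even on this tiny corpus the scoring behaves as expected: 我/én, which co-occur in exactly the segments where both appear, score above 你/én, where 你 also occurs in a third segment without én. The abstract's point is that such frequency-based association is easily outperformed by alignment- and embedding-based methods.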