Statistical language models within the algebra of weighted rational languages
Authors:
Corporate author:
Document type: Article
Published: 2009
Series: Acta Cybernetica, 19 No. 2
Keywords: Computer science, Cybernetics
Subject terms:
Online access: http://acta.bibl.u-szeged.hu/12868
Abstract: Statistical language models are an important tool in natural language processing. They represent prior knowledge about a certain language, which is usually gained from a set of samples called a corpus. In this paper, we present a novel way of creating N-gram language models using weighted finite automata. The construction of these models is formalised within the algebra underlying weighted finite automata and expressed in terms of weighted rational languages and transductions. Besides the algebra, we make use of five special constant weighted transductions which rely only on the alphabet and the model parameter N. In addition, we discuss efficient implementations of these transductions in terms of virtual constructions.
Pages: 313-356
ISSN: 0324-721X
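
The abstract above formalises N-gram model construction inside the algebra of weighted rational languages and transductions; the paper's five constant transductions and their virtual implementations are not reproduced here. As a point of reference only, the following Python sketch shows the kind of object such a construction ultimately yields: a bigram (N = 2) model whose states are word histories and whose arc weights are negative log probabilities, i.e. weights in the tropical semiring. The function names and the toy corpus are illustrative assumptions, not part of the paper.

```python
# Minimal sketch (assumption, not the paper's algebraic construction):
# a bigram model estimated from a toy corpus and stored as a weighted
# automaton, with one-word histories as states and -log probabilities
# as arc weights (tropical semiring).
import math
from collections import defaultdict

BOS, EOS = "<s>", "</s>"

def build_bigram_automaton(corpus):
    """Count bigrams and turn relative frequencies into arc weights."""
    counts = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:
        tokens = [BOS] + sentence + [EOS]
        for prev, word in zip(tokens, tokens[1:]):
            counts[prev][word] += 1
    # arcs[state][word] = -log P(word | state); states are histories
    arcs = {}
    for state, successors in counts.items():
        total = sum(successors.values())
        arcs[state] = {w: -math.log(c / total) for w, c in successors.items()}
    return arcs

def sentence_weight(arcs, sentence):
    """Total weight (negative log probability) of a sentence path."""
    tokens = [BOS] + sentence + [EOS]
    weight = 0.0
    for prev, word in zip(tokens, tokens[1:]):
        if prev not in arcs or word not in arcs[prev]:
            return math.inf  # unseen bigram: no path in this unsmoothed model
        weight += arcs[prev][word]
    return weight

corpus = [["the", "cat", "sat"], ["the", "dog", "sat"], ["the", "cat", "ran"]]
model = build_bigram_automaton(corpus)
print(sentence_weight(model, ["the", "cat", "sat"]))   # finite path weight
print(sentence_weight(model, ["the", "bird", "sat"]))  # inf: unseen bigram
```

In practice such a model would be stored as a weighted finite-state acceptor (for example with a toolkit like OpenFst), which is the kind of object the paper's operations on weighted rational languages and transductions act upon.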