Pre-trained Language Models
The intuition behind pre-trained language models is to create a black box that understands a language and can then be asked to perform any specific task in that language.
ParsBERT is a monolingual language model based on Google's BERT architecture. It is pre-trained on large Persian corpora covering various writing styles and numerous subjects (e.g., scientific texts, novels, news), comprising more than 3.9M documents, 73M sentences, and 1.3B words.
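As a sketch of how such a model can be queried as a "black box", the snippet below loads ParsBERT through the Hugging Face transformers fill-mask pipeline. The model id `HooshvareLab/bert-base-parsbert-uncased` and the surrounding calls are assumptions not stated in the text; check the model hub for the current release name.

```python
# Hedged sketch: masked-token prediction with ParsBERT.
# The model id below is an assumption (the usual HooshvareLab release
# name); it is not given in the text above.
MODEL_ID = "HooshvareLab/bert-base-parsbert-uncased"


def predict_masked(text: str, model_id: str = MODEL_ID, top_k: int = 5):
    """Return the top_k candidate fills for the [MASK] token in text."""
    # Deferred import so the sketch parses without transformers installed.
    from transformers import pipeline

    fill_mask = pipeline("fill-mask", model=model_id)
    return fill_mask(text, top_k=top_k)


if __name__ == "__main__":
    # Downloads the model weights on first use.
    for candidate in predict_masked("تهران پایتخت [MASK] است."):
        print(candidate["token_str"], candidate["score"])
```

The same pattern applies to any BERT-style checkpoint: only the model id changes.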
ParsPer is a dependency parser created by training the graph-based MateParser on the entire Uppsala Persian Dependency Treebank (UPDT) with a selected configuration.
Bolbolzaban/gpt2-persian is a generative language model trained on 27GB of text collected from various Persian websites, using hyperparameters similar to the standard gpt2-medium configuration, with two differences.
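A minimal usage sketch for this model, assuming the Hugging Face transformers text-generation pipeline: the model id `bolbolzaban/gpt2-persian` comes from the text above, but the calls around it are assumptions, and the model card may specify additional prompting conventions.

```python
# Hedged sketch: sampling Persian text from bolbolzaban/gpt2-persian.
MODEL_ID = "bolbolzaban/gpt2-persian"


def generate(prompt: str, max_new_tokens: int = 40) -> str:
    """Sample a continuation of prompt from the GPT-2 Persian model."""
    # Deferred import so the sketch parses without transformers installed.
    from transformers import pipeline

    generator = pipeline("text-generation", model=MODEL_ID)
    return generator(prompt, max_new_tokens=max_new_tokens)[0]["generated_text"]


if __name__ == "__main__":
    # Downloads the model weights on first use; output is sampled text.
    print(generate("در یک روز آفتابی"))
```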
Similarly to ParsBERT, an ALBERT-based Persian model was trained on Google's ALBERT BASE Version 2.0 architecture over various writing styles from numerous subjects (e.g., scientific texts, novels, news), with more than 3.9M documents, 73M sentences, and 1.3B words.