1/3/2024

Github persian data

This is a Persian DeepSpeech model that is currently in development.

persian_stt.ipynb contains a notebook that demonstrates how to preprocess the data; preprocessing is embedded in the file. The Common Voice dataset was obtained from here, and the raw text corpus was obtained from here. Using STT's bin/import_cv2.py script, the data was matched to the alphabet and converted to CSV files. The corpus was then normalized, tokenized, and cleaned using the persianify function and _HTMLStripper in the notebook. The generated validated.csv was passed as auto_input_dataset=metadata in the configuration step.

The model was trained with the following configuration: scorer_path = '/content/kenlm-persian.scorer'. Transfer learning from the STT English model was enabled (with drop_source_layers=2). The model was trained in 2 separate runs with force_initialize_learning_rate. The learning rate was increased for the second run from 0.00012 to 0.0004. The dropout rate was then decreased as the learning rate was reduced on plateau, in order to reach the minimum loss values. Training was stopped at 1896648 steps after optimizing the model.

If you want to make any changes to the training of the model, including using the F1CE loss function or different hyperparameters, change the related files; in this instance they are hyperparameteres.py and f1ce_loss.py. Furthermore, feature extraction is not embedded in the main model, and you need to use the methods in the feature_extraction.py file to append the features to the end of each sample.
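The persianify function and _HTMLStripper themselves are not reproduced in the post. As a rough sketch of what such a normalization step might look like, here is a minimal HTML-stripping and character-normalization pass; the character map and the body of persianify are assumptions for illustration, not the notebook's actual code:

```python
import re
from html.parser import HTMLParser

class _HTMLStripper(HTMLParser):
    """Collects only the text content of an HTML fragment."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)

    def text(self):
        return "".join(self.chunks)

# Hypothetical normalization table: map Arabic-script variants
# to their standard Persian forms (a common cleaning step).
_CHAR_MAP = str.maketrans({
    "ي": "ی",  # Arabic yeh -> Persian yeh
    "ك": "ک",  # Arabic kaf -> Persian kaf
})

def persianify(text: str) -> str:
    """Strip HTML, normalize characters, collapse whitespace."""
    stripper = _HTMLStripper()
    stripper.feed(text)
    cleaned = stripper.text().translate(_CHAR_MAP)
    return re.sub(r"\s+", " ", cleaned).strip()

print(persianify("<p>علي  و  كتاب</p>"))  # -> علی و کتاب
```

A real pipeline would also handle digits, diacritics, and tokenization, but the shape of the step is the same: strip markup first, then normalize the remaining text.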
Persian Emotion Detection using ParsBERT and Imbalanced Data Handling Approaches

Abstract

Emotion recognition is one of the machine learning applications which can be done using text, speech, or image data gathered from social media spaces. Detecting emotion can help us in different fields, including opinion mining. With the spread of social media, platforms like Twitter have become data sources, and the informal language used on these platforms makes the emotion detection task difficult. EmoPars and ArmanEmo are two new human-labeled emotion datasets for the Persian language. These datasets, especially EmoPars, suffer from an imbalance in the number of samples between classes. In this paper, we evaluate EmoPars and compare it with ArmanEmo. Throughout this analysis, we use data augmentation techniques, data re-sampling, and class-weights with Transformer-based Pretrained Language Models (PLMs) to handle the imbalance problem of these datasets. Moreover, feature selection is used to enhance the models' performance by emphasizing specific features of the text. In addition, we provide a new policy for selecting data from EmoPars, which selects only high-confidence samples; as a result, the model does not see samples without a specific emotion during training. Our model reaches a Macro-averaged F1-score of 0.81 and 0.76 on ArmanEmo and EmoPars, respectively, which are new state-of-the-art results on these benchmarks.

Repository structure:

|_ augmentation: notebook used for data augmentation
|_ augmented datasets: datasets with augmented samples
|_ dataset modifier: notebook used to create datasets using thresholds or removing uncertain samples
|_ main dataset: includes EmoPars and ArmanEmo datasets
|_ modified datasets: result of dataset modifier notebook
|_ models: files to create binary classifiers
|_ data: dictionary used to detect misspelled words
|_ multilabel: files to train multilabel classifier
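The abstract names class-weights as one of the imbalance-handling techniques. A minimal sketch of the standard "balanced" weighting scheme (each class weighted inversely to its frequency, n_samples / (n_classes * count)); the function name and toy labels here are illustrative, not the repository's actual code:

```python
from collections import Counter

def class_weights(labels):
    """Weight each class inversely to its frequency:
    n_samples / (n_classes * count), the scheme commonly
    called 'balanced'. Rare classes get larger weights."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * cnt) for c, cnt in counts.items()}

# Toy imbalanced label set: 8 'anger' samples vs 2 'joy' samples.
w = class_weights(["anger"] * 8 + ["joy"] * 2)
print(w)  # -> {'anger': 0.625, 'joy': 2.5}
```

Weights like these are typically passed into the training loss (e.g. as per-class weights in a cross-entropy loss) so that errors on the minority class cost more, counteracting the skew in the data.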