---
{}
---

# text classification

This model is a fine-tuned version of XLM-RoBERTa (XLM-R) on an Azerbaijani text classification dataset. XLM-RoBERTa is a powerful multilingual model that supports 100+ languages. Our fine-tuned model takes advantage of XLM-R's language-agnostic capabilities to enhance performance on text classification for the Azerbaijani language, with the goal of accurately categorizing and analyzing Azerbaijani text inputs.

# How to Use

This model can be loaded and used for prediction with the Hugging Face Transformers library. Below is an example code snippet in Python:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

# Path to the locally saved fine-tuned model directory
model_path = r"/home/user/Desktop/Synthetic data/models/model_bart_saved"
model = AutoModelForSequenceClassification.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

nlp = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
# "In the country we live in, doing good deeds is one of the main indicators of character."
print(nlp("Yaşadığımız ölkədə xeyirxahlıq etmək əsas keyfiyyət göstəricilərindən biridir"))
```
Result:

```
[{'label': 'positive', 'score': 0.9997604489326477}]
```
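For reference, the `score` in the result above is a softmax over the model's raw output logits. The sketch below illustrates that mapping; the logit values and the `negative`/`positive` label order are illustrative assumptions, not values taken from the actual model (a real model's label order comes from its `config.id2label`):

```python
import torch

# Illustrative logits for a single input, in an assumed
# [negative, positive] label order.
logits = torch.tensor([[-4.2, 4.1]])

# The pipeline applies a softmax over the logits and reports
# the highest-scoring label, as in the result above.
probs = torch.softmax(logits, dim=-1)
score, idx = probs.max(dim=-1)
id2label = {0: "negative", 1: "positive"}  # hypothetical mapping
print([{"label": id2label[idx.item()], "score": score.item()}])
```

This is why the scores across all labels always sum to 1 for each input.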

# Limitations and Bias

The model's performance may be limited because it was fine-tuned for only one epoch, so it may not fully capture the intricacies of the Azerbaijani language or the full scope of the text classification task. Users should also be aware of potential biases in the training data that may affect the model's effectiveness on specific types of text or classification categories.

# Ethical Considerations

Users should approach automated text classification systems with responsibility and an awareness of the ethical implications of their use. Such systems can be useful in a variety of contexts, but they are not infallible and may sometimes produce incorrect or inappropriate outputs.

In sensitive or high-stakes contexts, it is essential to exercise caution and verify the information provided by the system. Users should also be mindful of the potential consequences of relying on automated systems and consider seeking guidance from human experts when necessary.

Furthermore, users should avoid using these systems to make important decisions without proper human oversight. They should recognize that such systems may perpetuate or amplify biases present in their training data, and take steps to mitigate any negative impacts.

In summary, while automated classification systems can be valuable tools, they should be used responsibly, ethically, and with an understanding of their limitations and potential risks.

# Citation

Please cite this model as follows:

```bibtex
@misc{alasdevcenter_text_classification,
  author    = {Alas Development Center},
  title     = {text classification},
  year      = {2024},
  url       = {https://huggingface.co/alasdevcenter/text classification},
  doi       = {10.57967/hf/2027},
  publisher = {Hugging Face}
}
```