---
library_name: transformers
license: mit
base_model: bert-base-uncased
tags:
- bert
- fine-tuning
- text-classification
model-index:
- name: NLP_with_Disaster_Tweets
  results:
  - task:
      type: text-classification
      name: Text Classification
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.835
language:
- en
---
# Disaster Tweets Classification
This model is a fine-tuned BERT model that classifies whether a tweet describes a real disaster.
## Model Description
- Based on `bert-base-uncased`
- Fine-tuned for a binary classification task (disaster vs. not a disaster)
- Achieves 83.5% accuracy on the validation set
- Trained on Kaggle's "Natural Language Processing with Disaster Tweets" competition dataset (see the training sketch below)
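
The exact training script is not included in this card. The sketch below shows one plausible way to reproduce the fine-tuning with the `Trainer` API; the local `train.csv` path, the `text`/`target` column names from the Kaggle data, and all hyperparameters here are illustrative assumptions, not the recipe actually used for this model.

```python
import pandas as pd
from datasets import Dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    DataCollatorWithPadding,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Kaggle's train.csv (hypothetical local path) has "text" and "target"
# columns; Trainer expects the label column to be named "labels".
df = pd.read_csv("train.csv")
dataset = Dataset.from_pandas(df[["text", "target"]].rename(columns={"target": "labels"}))

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

dataset = dataset.map(tokenize, batched=True).train_test_split(test_size=0.2)

# Illustrative hyperparameters, not the ones used to train this checkpoint
args = TrainingArguments(output_dir="disaster-bert", num_train_epochs=3, per_device_train_batch_size=16)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    data_collator=DataCollatorWithPadding(tokenizer),  # pad dynamically per batch
)
trainer.train()
```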
## How to Use
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load the fine-tuned model and tokenizer from the Hub
tokenizer = AutoTokenizer.from_pretrained("real-jiakai/NLP_with_Disaster_Tweets")
model = AutoModelForSequenceClassification.from_pretrained("real-jiakai/NLP_with_Disaster_Tweets")

# Classify a single tweet
text = "There was a major earthquake in California"
inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=512)
with torch.no_grad():
    outputs = model(**inputs)
predicted_class = outputs.logits.argmax(-1).item()  # index of the higher-scoring class
```
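If you need a confidence score rather than just a class index, apply a softmax over the logits. The snippet below continues the example above; note that this card does not document the label mapping, so check `model.config.id2label` for the meaning of each index rather than assuming it.

```python
# Continue from the example above: softmax the logits for a confidence score
probs = torch.softmax(outputs.logits, dim=-1)
# The index-to-name mapping comes from the model config; inspect it
# (model.config.id2label) instead of assuming which index means "disaster"
label = model.config.id2label[predicted_class]
print(f"{label}: {probs[0, predicted_class].item():.3f}")
```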
## License
This model is licensed under the [MIT](https://opensource.org/license/mit) License.
## Citation
If you use this model in your work, please cite:
```bibtex
@misc{NLP_with_Disaster_Tweets,
  author    = {real-jiakai},
  title     = {NLP_with_Disaster_Tweets},
  year      = {2024},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/real-jiakai/NLP_with_Disaster_Tweets}
}
```