question_classifier_model_v2
This model is a fine-tuned version of sophiaqho/question_classifier_model. It achieves the following results on the evaluation set:
- Loss: 0.0522
Model description
The model is fine-tuned from DistilBERT (distilbert/distilbert-base-uncased).
Intended uses & limitations
It can be used as a question classifier: it outputs Label 1 if the input question is a "wh" (factoid) question and Label 0 if the input is a yes/no question. It can serve as part of the input preprocessing for question answering when the type of question needs to be determined beforehand.
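For illustration, a minimal inference sketch using the transformers text-classification pipeline is shown below. The label names (LABEL_1 for wh/factoid, LABEL_0 for yes/no) are assumed from the description above and may differ if the model config defines custom label names.

```python
# Minimal inference sketch; assumes LABEL_1 = wh/factoid and LABEL_0 = yes/no.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="sophiaqho/question_classifier_model_v2",
)

questions = [
    "What is the capital of France?",   # expected: LABEL_1 (wh/factoid)
    "Is Paris the capital of France?",  # expected: LABEL_0 (yes/no)
]
for q in questions:
    print(q, classifier(q))
```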
Training and evaluation data
The model was trained on questions from the SQuAD and BoolQ datasets; the exact train/evaluation split is not documented.
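Since the card does not describe how the training data was assembled, the following is only a hypothetical sketch using the datasets library, assuming SQuAD questions map to label 1 (wh/factoid) and BoolQ questions to label 0 (yes/no):

```python
# Hypothetical data-preparation sketch (the exact procedure is not documented).
from datasets import Dataset, concatenate_datasets, load_dataset

squad = load_dataset("squad", split="train")
boolq = load_dataset("boolq", split="train")

# SQuAD questions are predominantly "wh"/factoid -> label 1.
wh_questions = Dataset.from_dict(
    {"text": squad["question"], "label": [1] * len(squad)}
)
# BoolQ questions are yes/no -> label 0.
yesno_questions = Dataset.from_dict(
    {"text": boolq["question"], "label": [0] * len(boolq)}
)

train_data = concatenate_datasets([wh_questions, yesno_questions]).shuffle(seed=42)
print(train_data[0])
```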
Training procedure
Training hyperparameters
The following hyperparameters were used during training (see the code sketch after the list):
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
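The training script is not published; as a hedged sketch, these settings could be expressed with the Hugging Face Trainer roughly as follows (the dataset objects are placeholders, and the Adam betas/epsilon listed above are the TrainingArguments defaults):

```python
# Hedged sketch of the listed hyperparameters with transformers' Trainer API.
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model = AutoModelForSequenceClassification.from_pretrained(
    "sophiaqho/question_classifier_model", num_labels=2
)
tokenizer = AutoTokenizer.from_pretrained("sophiaqho/question_classifier_model")

args = TrainingArguments(
    output_dir="question_classifier_model_v2",
    learning_rate=1e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    evaluation_strategy="epoch",  # matches the per-epoch validation losses below
)

# train_dataset / eval_dataset are placeholders for the tokenized question data.
# trainer = Trainer(model=model, args=args, tokenizer=tokenizer,
#                   train_dataset=train_dataset, eval_dataset=eval_dataset)
# trainer.train()
```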
Training results
| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
| No log | 1.0 | 159 | 0.0591 |
| No log | 2.0 | 318 | 0.0509 |
| No log | 3.0 | 477 | 0.0522 |
Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
Model tree for sophiaqho/question_classifier_model_v2
- Base model: distilbert/distilbert-base-uncased
- Finetuned from: sophiaqho/question_classifier_model