roberta-base-finetuned-squad-v1
This model is a fine-tuned version of roberta-base on the SQuAD dataset.
Model description
Given a context passage, the model answers a question by searching the passage and extracting the relevant span of text.
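The snippet below is a minimal usage sketch with the Transformers question-answering pipeline, assuming this repository's model id (sooolee/roberta-base-finetuned-squad-v1); the context and question are made-up examples.

```python
from transformers import pipeline

# Load the fine-tuned model into a question-answering pipeline.
qa = pipeline("question-answering", model="sooolee/roberta-base-finetuned-squad-v1")

context = (
    "RoBERTa is a robustly optimized BERT pretraining approach released by "
    "Facebook AI. It was trained on more data and with larger batches than BERT."
)
question = "Who released RoBERTa?"

# The pipeline returns the extracted answer span plus a confidence score.
result = qa(question=question, context=context)
print(result["answer"], result["score"])
```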
Training hyperparameters
The following hyperparameters were used during training (a reproduction sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
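A minimal sketch of how these hyperparameters map onto Hugging Face TrainingArguments, assuming the standard Trainer-based SQuAD fine-tuning setup (dataset preprocessing and the Trainer call itself are omitted; output_dir is a placeholder):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="roberta-base-finetuned-squad-v1",  # placeholder output path
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,   # effective train batch size: 8 * 2 = 16
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    fp16=True,                       # native AMP mixed-precision training
)
```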
Training results
- Training loss: 0.77257
Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.3
Base model
- FacebookAI/roberta-base
Evaluation results
- F1 on SQuAD (self-reported): 92.296
- Exact match on SQuAD (self-reported): 86.045
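As an illustration, the SQuAD F1 and exact-match metrics above can be computed with the `evaluate` library's squad metric; the example id and answer below are placeholders, not taken from this model's evaluation run.

```python
import evaluate

squad_metric = evaluate.load("squad")

# Placeholder prediction/reference pair in the SQuAD metric format.
predictions = [{"id": "example-id-0", "prediction_text": "Denver Broncos"}]
references = [{
    "id": "example-id-0",
    "answers": {"text": ["Denver Broncos"], "answer_start": [177]},
}]

# Returns {"exact_match": ..., "f1": ...} on a 0-100 scale.
print(squad_metric.compute(predictions=predictions, references=references))
```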