---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: bert-base-uncased-finetuned-squad2
  results:
  - task:
      name: Question Answering
      type: question-answering
    dataset:
      type: squad_v2
      name: SQuAD 2
      config: squad_v2
      split: validation
    metrics:
    - type: exact_match
      value: 73.5029057525478
      name: Exact-Match
    - type: f1
      value: 76.79224102466394
      name: F1-score
language:
- en
pipeline_tag: question-answering
metrics:
- exact_match
- f1
---
## Model description

BERT-base fine-tuned on SQuAD 2.0: an encoder-based Transformer language model, pretrained with masked language modeling and next-sentence prediction objectives. It is suited to extractive question answering: given a question and a context, it predicts the answer span within the context (a minimal sketch of this decoding follows the list below).

- **Language model:** bert-base-uncased
- **Language:** English
- **Downstream task:** Question answering
- **Training data:** SQuAD 2.0 train set
- **Evaluation data:** SQuAD 2.0 validation set
- **Hardware accelerator:** NVIDIA Tesla T4 GPU
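To make "predicts answer spans" concrete, here is a minimal sketch of decoding with the model's raw start/end logits instead of the pipeline. The checkpoint name is the one from this card; the greedy argmax decoding with no no-answer thresholding is a simplification for illustration, not the exact procedure used in the evaluation below.

```python
# Minimal sketch of span prediction from start/end logits.
# Greedy argmax decoding is a simplification: the full SQuAD 2.0
# setup also scores the "no answer" option against a threshold.
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

checkpoint = "IProject-10/bert-base-uncased-finetuned-squad2"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForQuestionAnswering.from_pretrained(checkpoint)

question = "What does BERT predict in extractive QA?"
context = "In extractive QA, BERT predicts the start and end of the answer span."

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# The model emits one start logit and one end logit per token; the
# predicted answer is the token span between the argmax of each.
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
answer_ids = inputs["input_ids"][0, start : end + 1]
print(tokenizer.decode(answer_ids))
```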
## Intended uses & limitations

The model is intended for extractive question answering over English text, for example via the `question-answering` pipeline:
```python
# Install the dependency first: pip install transformers
from transformers import pipeline

model_checkpoint = "IProject-10/bert-base-uncased-finetuned-squad2"
question_answerer = pipeline("question-answering", model=model_checkpoint)

context = """
🤗 Transformers is backed by the three most popular deep learning libraries — Jax, PyTorch and TensorFlow — with a seamless integration
between them. It's straightforward to train your models with one before loading them for inference with the other.
"""
question = "Which deep learning libraries back 🤗 Transformers?"
question_answerer(question=question, context=context)
```
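Run as written, the pipeline returns a dict with a confidence score, character offsets into the context, and the extracted answer string. The shape below is what the `question-answering` pipeline emits; the exact score and offsets shown are illustrative assumptions and will vary:

```python
# Illustrative output; the score and offset values are assumptions, not measured:
# {'score': 0.98, 'start': 78, 'end': 105, 'answer': 'Jax, PyTorch and TensorFlow'}
```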
## Results

Evaluation on the SQuAD 2.0 validation set:
```json
{
  "exact": 73.5029057525478,
  "f1": 76.79224102466394,
  "total": 11873,
  "HasAns_exact": 73.46491228070175,
  "HasAns_f1": 80.05301580395327,
  "HasAns_total": 5928,
  "NoAns_exact": 73.5407905803196,
  "NoAns_f1": 73.5407905803196,
  "NoAns_total": 5945,
  "best_exact": 73.5029057525478,
  "best_exact_thresh": 0.9997851848602295,
  "best_f1": 76.79224102466425,
  "best_f1_thresh": 0.9997851848602295,
  "total_time_in_seconds": 209.65395342100004,
  "samples_per_second": 56.63141479692573,
  "latency_in_seconds": 0.01765804374808389
}
```
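For reference, metrics in this shape can be computed with the `evaluate` library's `squad_v2` metric, which reports the same exact/F1 fields split by answerable and unanswerable questions. A minimal sketch, where the example id, texts, and answer offset are placeholders rather than data from this run:

```python
# Minimal sketch of computing SQuAD 2.0 metrics with the `evaluate` library.
# The id, prediction text, and answer_start below are placeholders.
import evaluate

squad_v2_metric = evaluate.load("squad_v2")

predictions = [{
    "id": "example-1",
    "prediction_text": "Jax, PyTorch and TensorFlow",
    "no_answer_probability": 0.0,  # SQuAD 2.0 predictions carry a no-answer score
}]
references = [{
    "id": "example-1",
    "answers": {"text": ["Jax, PyTorch and TensorFlow"], "answer_start": [78]},
}]

# Returns a dict with exact, f1, HasAns_*/NoAns_* splits, and best_* thresholds,
# i.e. the same fields reported above.
print(squad_v2_metric.compute(predictions=predictions, references=references))
```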
## Training hyperparameters

The following hyperparameters were used during training (the sketch after this list shows how they map onto `TrainingArguments`):
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
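A minimal sketch of these values expressed as `TrainingArguments`. Only the listed hyperparameters come from this card; the output directory is a placeholder, and the dataset preprocessing and `Trainer` wiring are omitted:

```python
# Sketch: the hyperparameters above expressed as TrainingArguments.
# output_dir is a placeholder; Adam with betas=(0.9, 0.999) and
# epsilon=1e-8 is already the Trainer's default optimizer.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-base-uncased-finetuned-squad2",  # placeholder
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```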
## Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.0122        | 1.0   | 8235  | 1.0740          |
| 0.6805        | 2.0   | 16470 | 1.0820          |
| 0.4542        | 3.0   | 24705 | 1.3537          |
This model is a fine-tuned version of bert-base-uncased on the squad_v2 dataset. It achieves the following results on the evaluation set:
- Loss: 1.3537
## Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.2
- Tokenizers 0.13.3