---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: bert-base-uncased-finetuned-squad2
  results: []
pipeline_tag: question-answering
metrics:
- exact_match
- f1
language:
- en
---

# bert-base-uncased-finetuned-squad2

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3537

## Model description

BERT-base fine-tuned on SQuAD 2.0: an encoder-only Transformer language model, pretrained with Masked Language Modeling and Next Sentence Prediction. It is suited to extractive question answering and predicts the answer span within the provided context.

- Training data: SQuAD 2.0 train set
- Evaluation data: SQuAD 2.0 validation set
- Hardware accelerator: NVIDIA Tesla T4 GPU

## Intended uses & limitations

Intended for extractive question answering, for example:

```python
from transformers import pipeline

question = "How many programming languages does BLOOM support?"
context = "BLOOM has 176 billion parameters and can generate text in 46 natural languages and 13 programming languages."

question_answerer = pipeline("question-answering", model="IProject-10/bert-base-uncased-finetuned-squad2")
question_answerer(question=question, context=context)
```

## Results

Evaluation on the SQuAD 2.0 validation dataset (validation loss: 1.3537; see the training results table below).

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.0122        | 1.0   | 8235  | 1.0740          |
| 0.6805        | 2.0   | 16470 | 1.0820          |
| 0.4542        | 3.0   | 24705 | 1.3537          |

### Framework versions

- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.2
- Tokenizers 0.13.3
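
### Span prediction without the pipeline

The pipeline example above hides the span-prediction step described in the model description. Below is a minimal sketch of the same inference done directly with `AutoModelForQuestionAnswering`; it is simplified (no long-context striding and no SQuAD 2.0 "no answer" thresholding), and the expected output is only indicative.

```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model_id = "IProject-10/bert-base-uncased-finetuned-squad2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForQuestionAnswering.from_pretrained(model_id)

question = "How many programming languages does BLOOM support?"
context = "BLOOM has 176 billion parameters and can generate text in 46 natural languages and 13 programming languages."

# Encode question and context as a single sequence pair.
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# The model scores every token as a possible start and end of the answer span;
# take the argmax of each (simplification: ignores invalid start > end pairs).
start_idx = int(outputs.start_logits.argmax())
end_idx = int(outputs.end_logits.argmax())

# Map the selected token span back to text.
answer_tokens = inputs["input_ids"][0][start_idx : end_idx + 1]
print(tokenizer.decode(answer_tokens))  # expected answer for this example: "13"
```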
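
### Reproducing the training configuration

As a rough guide, the hyperparameters listed above map onto `TrainingArguments` as sketched below. This is an assumption-laden sketch, not the original training script: the output directory name is illustrative, and the SQuAD 2.0 preprocessing and `Trainer` setup are omitted.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-base-uncased-finetuned-squad2",  # illustrative name
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=3,
    seed=42,
    lr_scheduler_type="linear",
    adam_beta1=0.9,           # Adam betas/epsilon as reported above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",  # assumption: per-epoch eval, matching the results table
)
```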