albert-base-v2-squad_v2

This model is a fine-tuned version of albert-base-v2 on the squad_v2 dataset.

Model description

This model is fine-tuned for the extractive question answering task on The Stanford Question Answering Dataset 2.0 (SQuAD2.0).

For convenience, this model is prepared to be used with the PyTorch, TensorFlow, and ONNX frameworks.
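
The PyTorch and TensorFlow checkpoints load through the standard Auto classes. For ONNX, one option is to run the exported graph with onnxruntime directly; this is a minimal sketch, and the filename model.onnx is an assumption about this repository's layout:

>>> # PyTorch and TensorFlow weights via the Auto classes
>>> from transformers import AutoModelForQuestionAnswering, TFAutoModelForQuestionAnswering
>>> pt_model = AutoModelForQuestionAnswering.from_pretrained("squirro/albert-base-v2-squad_v2")
>>> tf_model = TFAutoModelForQuestionAnswering.from_pretrained("squirro/albert-base-v2-squad_v2")
>>> # ONNX inference session (the filename "model.onnx" is hypothetical)
>>> from huggingface_hub import hf_hub_download
>>> import onnxruntime
>>> session = onnxruntime.InferenceSession(hf_hub_download("squirro/albert-base-v2-squad_v2", "model.onnx"))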

Intended uses & limitations

This model can handle question-context pairs where the context does not contain the answer (unanswerable questions, as in SQuAD2.0). Make sure to specify handle_impossible_answer=True when using QuestionAnsweringPipeline.

Example usage:

>>> from transformers import AutoModelForQuestionAnswering, AutoTokenizer, QuestionAnsweringPipeline
>>> model = AutoModelForQuestionAnswering.from_pretrained("squirro/albert-base-v2-squad_v2")
>>> tokenizer = AutoTokenizer.from_pretrained("squirro/albert-base-v2-squad_v2")
>>> qa_model = QuestionAnsweringPipeline(model=model, tokenizer=tokenizer)
>>> qa_model(
...     question="What's your name?",
...     context="My name is Clara and I live in Berkeley.",
...     handle_impossible_answer=True  # important!
... )
{'score': 0.9027367830276489, 'start': 11, 'end': 16, 'answer': 'Clara'}
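
With handle_impossible_answer=True the pipeline also scores the empty answer, so a question the context cannot answer should come back with start=0, end=0, and an empty answer string (the exact score will vary):

>>> qa_model(
...     question="What is the capital of France?",
...     context="My name is Clara and I live in Berkeley.",
...     handle_impossible_answer=True
... )
{'score': ..., 'start': 0, 'end': 0, 'answer': ''}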

Training and evaluation data

Training and evaluation were done on SQuAD2.0.
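
The eval_total of 11,873 in the results below matches the size of the SQuAD2.0 validation split; train_samples (131,958) is slightly larger than the raw train split (130,319 examples), presumably because long contexts are split into multiple features during tokenization. A minimal sketch for loading the data with the datasets library:

>>> from datasets import load_dataset
>>> squad_v2 = load_dataset("squad_v2")
>>> squad_v2["train"].num_rows, squad_v2["validation"].num_rows
(130319, 11873)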

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 32
  • eval_batch_size: 8
  • seed: 42
  • distributed_type: tpu
  • num_devices: 8
  • total_train_batch_size: 256
  • total_eval_batch_size: 64
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 3.0
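
These values map onto the Transformers TrainingArguments API; a hypothetical configuration reproducing them might look like the following (the Adam betas and epsilon listed above are the library defaults, and the output path is an assumption):

>>> from transformers import TrainingArguments
>>> args = TrainingArguments(
...     output_dir="albert-base-v2-squad_v2",  # hypothetical output path
...     learning_rate=5e-5,
...     per_device_train_batch_size=32,  # x 8 TPU cores = 256 total
...     per_device_eval_batch_size=8,    # x 8 TPU cores = 64 total
...     seed=42,
...     lr_scheduler_type="linear",
...     num_train_epochs=3.0,
...     tpu_num_cores=8,
... )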

Training results

  • epoch: 3
  • eval_HasAns_exact: 75.3374
  • eval_HasAns_f1: 81.7083
  • eval_HasAns_total: 5928
  • eval_NoAns_exact: 82.2876
  • eval_NoAns_f1: 82.2876
  • eval_NoAns_total: 5945
  • eval_best_exact: 78.8175
  • eval_best_exact_thresh: 0
  • eval_best_f1: 81.9984
  • eval_best_f1_thresh: 0
  • eval_exact: 78.8175
  • eval_f1: 81.9984
  • eval_samples: 12171
  • eval_total: 11873
  • train_loss: 0.775293
  • train_runtime: 1402
  • train_samples: 131958
  • train_samples_per_second: 282.363
  • train_steps_per_second: 1.104
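
As a quick sanity check, the overall exact-match score is the example-weighted average of the HasAns and NoAns subsets (5,928 and 5,945 examples respectively):

>>> round((75.3374 * 5928 + 82.2876 * 5945) / 11873, 4)  # matches eval_exact
78.8175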

Framework versions

  • Transformers 4.18.0.dev0
  • PyTorch 1.9.0+cu111
  • Datasets 1.18.3
  • Tokenizers 0.11.6

About Us

Squirro marries data from any source with your intent and context to intelligently augment decision-making, right when you need it!

An Insight Engine at its core, Squirro works with global organizations, primarily in financial services, the public sector, professional services, and manufacturing. Customers include the Bank of England, the European Central Bank (ECB), Deutsche Bundesbank, Standard Chartered, Henkel, Armacell, Candriam, and many other world-leading firms.

Founded in 2012, Squirro is currently present in Zürich, London, New York, and Singapore. Further information about AI-driven business insights can be found at http://squirro.com.
