---
language: English
task: extractive question answering
datasets: SQuAD 2.0
tags:
  - bert-base
---

## Model Description

This model performs extractive question answering in English. It is based on the bert-base-cased model and is case-sensitive: it makes a difference between english and English.
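
A quick way to see the effect of casing is to compare how the tokenizer splits the two spellings (a minimal sketch; the exact subword splits depend on the cased vocabulary):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("zhufy/squad-en-bert-base")

# The cased vocabulary treats the two spellings independently, so they
# generally produce different subword splits.
print(tokenizer.tokenize("English"))
print(tokenizer.tokenize("english"))
```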

## Training data

English SQuAD v2.0
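
The corpus is available on the Hugging Face Hub. A minimal sketch for inspecting it with the 🤗 Datasets library, assuming the standard `squad_v2` Hub identifier matches the card's "English SQuAD v2.0":

```python
from datasets import load_dataset

# "squad_v2" is the standard Hub identifier for SQuAD 2.0; the card only
# says "English SQuAD v2.0", so this mapping is an assumption.
squad = load_dataset("squad_v2")

print(squad)              # train and validation splits
print(squad["train"][0])  # one example: id, title, context, question, answers
```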

## How to use

You can use it directly from the 🤗 Transformers library with a pipeline:

```python
>>> from transformers import pipeline, AutoTokenizer, AutoModelForQuestionAnswering

>>> # Build a question-answering pipeline from the fine-tuned checkpoint
>>> tokenizer = AutoTokenizer.from_pretrained("zhufy/squad-en-bert-base")
>>> model = AutoModelForQuestionAnswering.from_pretrained("zhufy/squad-en-bert-base")
>>> nlp = pipeline("question-answering", model=model, tokenizer=tokenizer)

>>> context = ("A problem is regarded as inherently difficult if its "
...            "solution requires significant resources, whatever the "
...            "algorithm used. The theory formalizes this intuition, "
...            "by introducing mathematical models of computation to "
...            "study these problems and quantifying the amount of "
...            "resources needed to solve them, such as time and storage. "
...            "Other complexity measures are also used, such as the "
...            "amount of communication (used in communication complexity), "
...            "the number of gates in a circuit (used in circuit "
...            "complexity) and the number of processors (used in parallel "
...            "computing). One of the roles of computational complexity "
...            "theory is to determine the practical limits on what "
...            "computers can and cannot do.")

>>> question = "What are two basic primary resources used to gauge complexity?"

>>> inputs = {"question": question, "context": context}
>>> nlp(inputs)
{'score': 0.8589141368865967,
 'start': 305,
 'end': 321,
 'answer': 'time and storage'}
```
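
SQuAD v2.0 also contains unanswerable questions, so you may want to allow the pipeline to predict "no answer". A hedged sketch continuing the session above, using the question-answering pipeline's standard `handle_impossible_answer` option (the unanswerable question here is made up for illustration):

```python
# `handle_impossible_answer` is a standard option of the
# question-answering pipeline, not specific to this model.
result = nlp(
    question="Who first proved that P equals NP?",  # hypothetical unanswerable question
    context=context,
    handle_impossible_answer=True,
)
print(result)  # an empty 'answer' string signals that no answer was found
```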