roberta-base for QA, fine-tuned on community safety domain data
We fine-tuned the RoBERTa-based model deepset/roberta-base-squad2 (https://huggingface.co/deepset/roberta-base-squad2) on LiveSafe community safety dialogue data for event argument extraction, framed as a question-answering task.
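To illustrate the QA framing, a single training example pairs an argument-targeting question with the dialogue text as context in SQuAD-v2-style format. The example below is a minimal sketch; the record ID and answer annotation are hypothetical and not taken from the LiveSafe data.

# Hypothetical SQuAD-v2-style training example for event argument extraction
# (illustrative values only; not an actual LiveSafe record)
context = "I was attacked by someone in front of the bus station."
answer_text = "in front of the bus station"
train_example = {
    "id": "example-0001",
    "question": "What is the location of the incident?",
    "context": context,
    "answers": {"text": [answer_text], "answer_start": [context.index(answer_text)]},
}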
Using the model in Transformers
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

model_name = "yirenl2/plm_qa"

# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
    'question': 'What is the location of the incident?',
    'context': 'I was attacked by someone in front of the bus station.'
}
res = nlp(QA_input)

# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
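The pipeline call returns a dict with the predicted span and its confidence (keys 'answer', 'score', 'start', 'end'). If you load the model and tokenizer directly as in part (b), inference can be run along the following lines. This is a minimal sketch assuming PyTorch and the `model`/`tokenizer` objects created above; the question and context strings are reused from the example.

import torch

question = "What is the location of the incident?"
context = "I was attacked by someone in front of the bus station."

# Tokenize the question/context pair and run a forward pass
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Take the most likely start and end positions and decode the answer span
start_idx = int(outputs.start_logits.argmax())
end_idx = int(outputs.end_logits.argmax())
answer_ids = inputs["input_ids"][0][start_idx : end_idx + 1]
answer = tokenizer.decode(answer_ids, skip_special_tokens=True)
print(answer)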
Dataset used to train yirenl2/plm_qa
- squad_v2
Evaluation results
Self-reported results on the squad_v2 validation set:
- Exact Match: 0.000
- F1: 0.000
- total: 11869.000