This is a fine-tuned version of FLAN-T5 Large (783M parameters), trained on the public Spider dataset for English text-to-SQL generation.
To initialize the model:

```python
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("MRNH/flan-t5-large-PLsql")
```
To load the tokenizer and prepare an input (note that the tokenizer is loaded with a tokenizer class, not the model class):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("MRNH/flan-t5-large-PLsql")

input = tokenizer("<question> " + sentence["db_id"] + " </question> " + sentence["question"],
                  text_target=sentence["query"], return_tensors="pt")
```
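Here `sentence` is a single Spider example with the fields `db_id`, `question`, and `query`. An illustrative record (values shown for demonstration only) looks like:

```python
# One Spider-style example (illustrative values).
sentence = {
    "db_id": "concert_singer",
    "question": "How many singers do we have?",
    "query": "SELECT count(*) FROM singer",
}
```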
To generate a SQL query with the model:

```python
output = model.generate(input["input_ids"], attention_mask=input["attention_mask"])
```
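`generate` returns token ids, so the predicted SQL string still has to be decoded. A minimal sketch using the tokenizer loaded above:

```python
# Decode the generated token ids back into a SQL string,
# dropping padding and end-of-sequence tokens.
sql = tokenizer.batch_decode(output, skip_special_tokens=True)[0]
print(sql)
```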
Training is performed with the standard sequence-to-sequence cross-entropy loss. The forward pass returns an output object `h` whose `loss` and `logits` fields hold the loss over the target query tokens and the decoder predictions:

```python
h = model(input_ids=input["input_ids"],
          attention_mask=input["attention_mask"],
          labels=input["labels"])
loss, logits = h.loss, h.logits
```
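Putting the pieces together, a minimal fine-tuning loop might look like the sketch below. The `dataloader`, optimizer choice, and learning rate are illustrative assumptions, not the model's documented training configuration:

```python
import torch
from torch.optim import AdamW

# Hypothetical hyperparameters; adjust for your own setup.
optimizer = AdamW(model.parameters(), lr=1e-4)

model.train()
for batch in dataloader:  # batches of Spider examples tokenized as above
    outputs = model(input_ids=batch["input_ids"],
                    attention_mask=batch["attention_mask"],
                    labels=batch["labels"])
    outputs.loss.backward()   # cross-entropy loss over target query tokens
    optimizer.step()
    optimizer.zero_grad()
```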