parkervg/destt5-schema-prediction

Fine-tuned t5-large weights for the schema prediction model described in Correcting Semantic Parses with Natural Language through Dynamic Schema Encoding.

Training Data

The model has been fine-tuned on the 7,481 training examples in the SPLASH interactive semantic parsing dataset.

Training Objective

This model was initialized with t5-large and fine-tuned with the text-to-text generation objective.

Because this model operates in the interactive setting, the input combines the standard text-to-SQL features, question and db_schema, with the SPLASH-specific feedback and incorrect_parse fields. These are serialized into a single input string:

[question] || [incorrect_parse] || [db_id] | [table] : [column] ( [content] , [content] ) , [column] ( ... ) , [...] | [table] : ... | ... || [feedback]
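
As a minimal sketch of how such an input might be assembled: the question, incorrect parse, feedback, and schema values below are illustrative (drawn from a Spider-style database), and the exact delimiters should be checked against the DestT5 codebase.

```python
# Illustrative sketch: assemble the serialized input for the schema prediction model.
# All field values are made-up examples; see the DestT5 codebase for the canonical serialization.
question = "How many singers do we have?"
incorrect_parse = "SELECT count(*) FROM concert"
feedback = "You should count singers, not concerts."
db_id = "concert_singer"
# Schema as {table: [(column, [content values]), ...]}
schema = {
    "singer": [("singer_id", ["1", "2"]), ("name", ["Joe Sharp", "Timbaland"])],
    "concert": [("concert_id", ["1", "2"]), ("concert_name", ["Auditions", "Super bootcamp"])],
}

schema_str = " | ".join(
    f"{table} : " + " , ".join(
        f"{column} ( " + " , ".join(contents) + " )" for column, contents in columns
    )
    for table, columns in schema.items()
)
input_text = f"{question} || {incorrect_parse} || {db_id} | {schema_str} || {feedback}"
print(input_text)
```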

The model then predicts the schema items that appear in the final gold SQL query, prefixed by the db_id:

[db_id] | [table] : [column] ( [content] , [content] ) , [column] ( ... ) , [...] | [table] : ...
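
As one possible way to run inference, the weights can be loaded with the Hugging Face transformers library and the predicted schema string split back into tables and columns. This is a hedged sketch, not the official DestT5 inference pipeline: the generation settings, the example input, and the parsing rules are assumptions based on the formats shown above.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Load the fine-tuned schema prediction weights (t5-large architecture).
tokenizer = AutoTokenizer.from_pretrained("parkervg/destt5-schema-prediction")
model = AutoModelForSeq2SeqLM.from_pretrained("parkervg/destt5-schema-prediction")

# A serialized input following the format shown above (illustrative values).
input_text = (
    "How many singers do we have? || SELECT count(*) FROM concert || "
    "concert_singer | singer : singer_id ( 1 , 2 ) , name ( Joe Sharp , Timbaland ) | "
    "concert : concert_id ( 1 , 2 ) , concert_name ( Auditions , Super bootcamp ) || "
    "You should count singers, not concerts."
)

inputs = tokenizer(input_text, return_tensors="pt", truncation=True)
output_ids = model.generate(**inputs, max_length=256)
predicted = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(predicted)  # expected to follow the output format above, e.g. "[db_id] | [table] : [column] , ..."

# Parse the prediction into {table: [columns]}; the splitting rules are an assumption.
db_id, _, schema_part = predicted.partition(" | ")
predicted_schema = {}
for table_block in schema_part.split(" | "):
    table, _, columns = table_block.partition(" : ")
    predicted_schema[table.strip()] = [c.strip() for c in columns.split(" , ") if c.strip()]
print(db_id, predicted_schema)
```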

Performance

This model achieves an 88.98% F1 score for identifying schema items on the SPLASH test set.

When combined with the destt5-text2sql model, it achieves 53.43% correction accuracy (exact-match) on the SPLASH test set.

References

  1. Correcting Semantic Parses with Natural Language through Dynamic Schema Encoding

  2. DestT5 codebase

  3. Speak to your Parser: Interactive Text-to-SQL with Natural Language Feedback

Citation

@inproceedings{glenn2023correcting,
  author = "Parker Glenn and Parag Pravin Dakle and Preethi Raghavan",
  title = "Correcting Semantic Parses with Natural Language through Dynamic Schema Encoding",
  booktitle = "Proceedings of the 5th Workshop on NLP for Conversational AI",
  publisher = "Association for Computational Linguistics",
  year = "2023"
}