---
license: cc-by-sa-4.0
---

# Flair-abbr-roberta-pubmed-plos-filtered

This is a stacked embedding model that combines embeddings from roberta-large, the HunFlair PubMed models, and character-level language models trained on PLOS, fine-tuned on the PLODv2 filtered dataset. It is released with our LREC-COLING 2024 publication (coming soon). It achieves the following results on the test set:

Results on abbreviations:

- Precision: 0.8924
- Recall: 0.9375
- F1: 0.9144

Results on long forms:

- Precision: 0.8750
- Recall: 0.9225
- F1: 0.8981